Weight Perturbation: An Optimal Architecture and Learning Technique for Analog VLSI Feedforward and Recurrent Multilayer Networks

Authors

  • Marwan A. Jabri
  • Barry Flower
Abstract

Previous work on analog VLSI implementation of multilayer perceptrons with on-chip learning has mainly targeted the implementation of algorithms such as back-propagation. Although back-propagation is efficient, its implementation in analog VLSI requires excessive computational hardware. It is shown that using gradient descent with direct approximation of the gradient instead of back-propagation is more economical for parallel analog implementations. It is shown that this technique (which is called 'weight perturbation') is suitable for multilayer recurrent networks as well. A discrete level analog implementation showing the training of an XOR network as an example is presented.
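The gradient approximation the abstract describes can be sketched in a few lines: perturb one weight at a time, measure the resulting change in the network error, and use the finite difference as the gradient estimate. The sketch below is a minimal software illustration, not the authors' analog circuit; the function names, learning rate, and the toy quadratic error surface are assumptions for demonstration.

```python
import numpy as np

def weight_perturbation_step(weights, error_fn, lr=0.1, delta=1e-3):
    """One step of weight perturbation: estimate each partial derivative
    dE/dw_i by the finite difference (E(w + delta*e_i) - E(w)) / delta,
    then take an ordinary gradient-descent step."""
    base_error = error_fn(weights)
    grads = np.zeros_like(weights)
    for i in range(weights.size):
        weights.flat[i] += delta                 # perturb one weight
        grads.flat[i] = (error_fn(weights) - base_error) / delta
        weights.flat[i] -= delta                 # restore it
    return weights - lr * grads, base_error

# Toy demo on a quadratic error surface (a stand-in for the network error);
# the error shrinks toward zero as the weights are driven to the minimum.
w = np.array([2.0, -3.0])
error = lambda v: float(np.sum(v ** 2))
for _ in range(200):
    w, e = weight_perturbation_step(w, error)
```

The appeal for analog VLSI is that each update needs only forward evaluations of the error, so no backward signal path or derivative circuitry is required.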


Related articles

Analog VLSI Stochastic Perturbative Learning Architectures∗

We present analog VLSI neuromorphic architectures for a general class of learning tasks, which include supervised learning, reinforcement learning, and temporal difference learning. The presented architectures are parallel, cellular, sparse in global interconnects, distributed in representation, and robust to noise and mismatches in the implementation. They use a parallel stochastic perturbatio...
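The parallel stochastic perturbation this abstract refers to can be illustrated by perturbing all weights at once with random signs and recovering a per-weight gradient estimate from a single error measurement (an SPSA-style estimator). This is a hedged sketch under that assumption, not the architecture from the paper itself; names and constants are illustrative.

```python
import numpy as np

def stochastic_perturbation_step(w, error_fn, lr=0.05, delta=1e-3, rng=None):
    """Perturb every weight simultaneously with a random +/- delta and use
    the single measured error change to update all weights in parallel."""
    rng = rng or np.random.default_rng()
    signs = rng.choice([-1.0, 1.0], size=w.shape)
    d_err = error_fn(w + delta * signs) - error_fn(w)
    # Gradient estimate for weight i is d_err / (delta * signs[i]);
    # since signs[i] is +/-1, dividing by it equals multiplying by it.
    return w - lr * (d_err / delta) * signs

# Toy demo on the same quadratic error surface: two error evaluations per
# step regardless of the number of weights, hence the sparse interconnect.
rng = np.random.default_rng(0)
w = np.array([2.0, -3.0])
error = lambda v: float(np.sum(v ** 2))
for _ in range(1000):
    w = stochastic_perturbation_step(w, error, rng=rng)
```

The per-step estimate is noisy, but its expectation matches the true gradient, which is why such architectures tolerate device mismatch well.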


Neuro-Optimizer: A New Artificial Intelligent Optimization Tool and Its Application for Robot Optimal Controller Design

The main objective of this paper is to introduce a new intelligent optimization technique that uses a prediction-correction strategy supported by a recurrent neural network for finding a near optimal solution of a given objective function. Recently there have been attempts for using artificial neural networks (ANNs) in optimization problems and some types of ANNs such as Hopfield network and Boltzm...


VLSI neural network with digital weights and analog multipliers

A VLSI feedforward neural network is presented that makes use of digital weights and analog multipliers. The network is trained in a chip-in-loop fashion with a host computer implementing the training algorithm. The chip uses a serial digital weight bus implemented by a long shift register to input the weights. The inputs and outputs of the network are provided directly at pins on the chip. The...


Analog Computation and Learning in VLSI

Nature has evolved highly advanced systems capable of performing complex computations, adaptation, and learning using analog components. Although digital systems have significantly surpassed analog systems in terms of performing precise, high speed, mathematical computations, digital systems cannot outperform analog systems in terms of power. Furthermore, nature has evolved techniques to deal w...


VLSI Realization of Artificial Neural Networks with Improved Fault Tolerance

The feed-forward neural network, which is a model of the cerebral neural network, has in-built fault tolerance. The conventional back-propagation algorithm reduces errors between the learning examples and the output of a multilayer neural network (MNN). However, it is not assured that the MNN behaves in the same manner when faults occur. For these reasons the study of fault tolerance in artificia...



Journal:
  • IEEE Transactions on Neural Networks

Volume 3, Issue 1

Pages  -

Publication date: 1991